GuReT: Distinguishing Guilt and Regret related Text (2401.16541v1)
Abstract: The intricate relationship between human decision-making and emotions, particularly guilt and regret, has significant implications for behavior and well-being. Yet the subtle distinctions and interplay between these emotions are often overlooked in computational models. This paper introduces a dataset tailored to dissect the relationship between guilt and regret and their unique textual markers, filling a notable gap in affective computing research. Our approach treats guilt and regret recognition as a binary classification task and employs three machine learning and six transformer-based deep learning techniques to benchmark the newly created dataset. The study further applies reasoning methods such as chain-of-thought and tree-of-thought prompting to assess the models' interpretive logic. The results indicate a clear performance edge for transformer-based models, which achieve a 90.4% macro F1 score compared with the 85.3% scored by the best machine learning classifier, demonstrating their superior capability in distinguishing complex emotional states.
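The abstract frames the benchmark as a binary text-classification problem scored with macro F1. The snippet below is a minimal sketch of how such a transformer-based baseline could be set up with the Hugging Face transformers library; it is not the authors' released code. The file name guret.csv, its text/label columns, the roberta-base checkpoint, the train/test split, and all hyperparameters are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch (not the authors' released code): fine-tuning a transformer
# encoder for binary guilt-vs-regret classification and reporting macro F1,
# in the spirit of the benchmark described above.
# Assumptions: a CSV file "guret.csv" with columns "text" and "label"
# (0 = guilt, 1 = regret); the checkpoint, split, and hyperparameters are
# illustrative, not taken from the paper.

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "roberta-base"  # stand-in for one of the six transformer variants

df = pd.read_csv("guret.csv")  # hypothetical file layout
train_df, test_df = train_test_split(
    df, test_size=0.2, stratify=df["label"], random_state=42)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Truncate/pad posts to a fixed length so they can be batched.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

train_ds = Dataset.from_pandas(train_df).map(tokenize, batched=True)
test_ds = Dataset.from_pandas(test_df).map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2)  # binary head: guilt vs. regret

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Macro F1 averages the per-class F1 scores, the metric quoted above.
    return {"macro_f1": f1_score(labels, preds, average="macro")}

args = TrainingArguments(output_dir="guret-clf", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)

trainer = Trainer(model=model, args=args, train_dataset=train_ds,
                  eval_dataset=test_ds, compute_metrics=compute_metrics)
trainer.train()
print(trainer.evaluate())  # reports macro F1 on the held-out split
```

A classical baseline of the kind mentioned in the abstract could reuse the same split by swapping the transformer for a TF-IDF vectorizer plus a tree-based classifier and evaluating with the same macro F1 call.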
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Zeelenberg M, Breugelmans SM. The role of interpersonal harm in distinguishing regret from guilt. Emotion. 2008;8(5):589. Higgins [1987] Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. 
Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. 
In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. 
Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. 
Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Davidai S, Gilovich T. 
The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. 
Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 
arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. 
[2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. 
[1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. 
[1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. 
[2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. 
Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. 
Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. 
Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. 
XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. 
Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. 
arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. 
Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. 
RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. 
Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. 
arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. 
Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. 
XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. 
Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. 
Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. 
Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. 
Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. 
Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. 
Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. 
[2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. 
Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. 
vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . 
Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. 
[2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. 
[2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. 
[2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. 
In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 
2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. 
Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. 
[1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. 
In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. 
arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. 
Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. 
arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. 
[2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. 
Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. 
Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Berndsen M, van der Pligt J, Doosje B, Manstead A. Guilt and regret: The determining role of interpersonal and intrapersonal harm. Cognition and Emotion. 2004;18(1):55–70. Zeelenberg and Breugelmans [2008] Zeelenberg M, Breugelmans SM. The role of interpersonal harm in distinguishing regret from guilt. Emotion. 2008;8(5):589. Higgins [1987] Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. 
arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. 
Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Zeelenberg M, Breugelmans SM. The role of interpersonal harm in distinguishing regret from guilt. Emotion. 2008;8(5):589. Higgins [1987] Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. 
Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. 
IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. 
The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. 
Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. 
arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. 
Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. 
Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. 
In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. 
[2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. 
Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. 
[2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. 
[2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. 
[1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. 
Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. 
Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. 
[2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. 
[2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. 
Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. 
Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. 
arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. 
[2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. 
Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. 
Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 
Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Mandel DR, Dhami MK. “What I did” versus “what I might have done”: Effect of factual versus counterfactual thinking on blame, guilt, and shame in prisoners. Journal of Experimental Social Psychology. 2005;41(6):627–635. Berndsen et al. [2004] Berndsen M, van der Pligt J, Doosje B, Manstead A. Guilt and regret: The determining role of interpersonal and intrapersonal harm. Cognition and Emotion. 2004;18(1):55–70. Zeelenberg and Breugelmans [2008] Zeelenberg M, Breugelmans SM. The role of interpersonal harm in distinguishing regret from guilt. Emotion. 2008;8(5):589. Higgins [1987] Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. 
The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. 
Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 
arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Berndsen M, van der Pligt J, Doosje B, Manstead A. Guilt and regret: The determining role of interpersonal and intrapersonal harm. Cognition and Emotion. 2004;18(1):55–70. Zeelenberg and Breugelmans [2008] Zeelenberg M, Breugelmans SM. The role of interpersonal harm in distinguishing regret from guilt. Emotion. 2008;8(5):589. Higgins [1987] Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. 
[1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. 
In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Zeelenberg M, Breugelmans SM. The role of interpersonal harm in distinguishing regret from guilt. Emotion. 2008;8(5):589. Higgins [1987] Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. 
Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. 
Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. 
arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. 
Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. 
A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. 
In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. 
[2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. 
Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. 
Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. 
[2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 
Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. 
Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. 
In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. 
[2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. 
RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. 
Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. 
Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. 
FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. 
Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. 
[2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. 
RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. 
In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. 
IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. 
Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. 
[2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. 
1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. 
Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Berndsen M, van der Pligt J, Doosje B, Manstead A. Guilt and regret: The determining role of interpersonal and intrapersonal harm. Cognition and Emotion. 2004;18(1):55–70. Zeelenberg and Breugelmans [2008] Zeelenberg M, Breugelmans SM. The role of interpersonal harm in distinguishing regret from guilt. Emotion. 2008;8(5):589. Higgins [1987] Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. 
[2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. 
Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. 
[2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Zeelenberg M, Breugelmans SM. The role of interpersonal harm in distinguishing regret from guilt. Emotion. 2008;8(5):589. Higgins [1987] Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. 
[2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. 
Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. 
Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. 
IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. 
Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. 
[2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. 
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. 
Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. 
A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 
Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. 
Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. 
In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. 
[2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. 
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. 
In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. 
Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. 
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. 
Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. 
Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sidorov G, Balouchzahi F, Butt S, Gelbukh A. 
Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. 
arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. 
Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. 
In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. 
Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 
- Schoeffler MS. Prediction of some stochastic events: A regret equalization model. Journal of experimental psychology. 1962;64(6):615. Mandel and Dhami [2005] Mandel DR, Dhami MK. “What I did” versus “what I might have done”: Effect of factual versus counterfactual thinking on blame, guilt, and shame in prisoners. Journal of Experimental Social Psychology. 2005;41(6):627–635. Berndsen et al. [2004] Berndsen M, van der Pligt J, Doosje B, Manstead A. Guilt and regret: The determining role of interpersonal and intrapersonal harm. Cognition and Emotion. 2004;18(1):55–70. Zeelenberg and Breugelmans [2008] Zeelenberg M, Breugelmans SM. The role of interpersonal harm in distinguishing regret from guilt. Emotion. 2008;8(5):589. Higgins [1987] Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. 
Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. 
Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Mandel DR, Dhami MK. “What I did” versus “what I might have done”: Effect of factual versus counterfactual thinking on blame, guilt, and shame in prisoners. Journal of Experimental Social Psychology. 2005;41(6):627–635. Berndsen et al. [2004] Berndsen M, van der Pligt J, Doosje B, Manstead A. Guilt and regret: The determining role of interpersonal and intrapersonal harm. Cognition and Emotion. 2004;18(1):55–70. Zeelenberg and Breugelmans [2008] Zeelenberg M, Breugelmans SM. The role of interpersonal harm in distinguishing regret from guilt. Emotion. 2008;8(5):589. Higgins [1987] Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. 
[2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. 
[2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. 
[2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:2003.10555. 2020. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z. Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
[2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. 
Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. 
arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. 
Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. 
Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. 
Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. 
In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. 
Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. 
[2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. 
Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 
2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. 
Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. 
[2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. 
[2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. 
[2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. 
[2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. 
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. 
arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
- Mandel DR, Dhami MK. “What I did” versus “what I might have done”: Effect of factual versus counterfactual thinking on blame, guilt, and shame in prisoners. Journal of Experimental Social Psychology. 2005;41(6):627–635. Berndsen et al. [2004] Berndsen M, van der Pligt J, Doosje B, Manstead A. Guilt and regret: The determining role of interpersonal and intrapersonal harm. Cognition and Emotion. 2004;18(1):55–70. Zeelenberg and Breugelmans [2008] Zeelenberg M, Breugelmans SM. The role of interpersonal harm in distinguishing regret from guilt. Emotion. 2008;8(5):589. Higgins [1987] Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. 
[2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. 
Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Berndsen M, van der Pligt J, Doosje B, Manstead A. Guilt and regret: The determining role of interpersonal and intrapersonal harm. Cognition and Emotion. 2004;18(1):55–70. Zeelenberg and Breugelmans [2008] Zeelenberg M, Breugelmans SM. The role of interpersonal harm in distinguishing regret from guilt. Emotion. 2008;8(5):589. Higgins [1987] Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 
2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. 
Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Zeelenberg M, Breugelmans SM. The role of interpersonal harm in distinguishing regret from guilt. Emotion. 2008;8(5):589. 
Higgins [1987] Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 
33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . 
Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. 
[2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. 
Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. 
Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. 
Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. 
Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. 
Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. 
XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. 
arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. 
Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. 
Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. 
ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. 
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. 
[1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. 
ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. 
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. 
- Guilt and regret: The determining role of interpersonal and intrapersonal harm. Cognition and Emotion. 2004;18(1):55–70. Zeelenberg and Breugelmans [2008] Zeelenberg M, Breugelmans SM. The role of interpersonal harm in distinguishing regret from guilt. Emotion. 2008;8(5):589. Higgins [1987] Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. 
Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. 
Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Zeelenberg M, Breugelmans SM. The role of interpersonal harm in distinguishing regret from guilt. Emotion. 2008;8(5):589. Higgins [1987] Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. 
In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. 
[2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. 
Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 
2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. 
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. 
Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. 
Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. 
Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. 
[2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. 
Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. 
arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. 
Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. 
Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. 
ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. 
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. 
Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. 
Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. 
arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. 
[2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. 
Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. 
Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 
1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. 
Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. 
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. 
arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Deng J, Ren F. 
A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. 
[2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. 
In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. 
[2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. 
[2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. 
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. 
arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
- Zeelenberg M, Breugelmans SM. The role of interpersonal harm in distinguishing regret from guilt. Emotion. 2008;8(5):589. Higgins [1987] Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. 
Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. 
FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Higgins ET. Self-discrepancy: a theory relating self and affect. Psychological review. 1987;94(3):319. Davidai and Gilovich [2018] Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. 
Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. 
In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Davidai S, Gilovich T. The ideal road not taken: The self-discrepancies involved in people’s most enduring regrets. Emotion. 2018;18(3):439. Zhang et al. [2021] Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. 
[2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. 
[2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. 
[2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:2003.10555. 2020. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z. Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. 
Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. 
XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. 
arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. 
Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. 
Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. 
ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. 
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. 
[1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. 
ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. 
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. 
Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. 
[2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 
2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. 
Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:2003.10555. 2020. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z. Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
[2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Zhang X, Zeelenberg M, Summerville A, Breugelmans SM. The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 
2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. 
Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. 
Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. 
Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. 
[2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. 
Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 
arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. 
arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. 
[2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. 
Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. 
Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. 
Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. 
Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. 
A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. 
Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. 
Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. 
[2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. 
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. 
[1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 
1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. 
Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. 
In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. 
Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. 
In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. 
Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. 
arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. 
In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 
2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. 
Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. 
Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. 
Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. 
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 
2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. 
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. 
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Hussain N, Sidorov G, Gelbukh A. 
Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. 
Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. 
Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. 
[2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. 
A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. 
XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. 
Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. 
Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. 
Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. 
Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. 
Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. 
Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. 
[2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. 
Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. 
vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . 
Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. 
[2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. 
[2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. 
[2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. 
- The role of self-discrepancies in distinguishing regret from guilt. Self and Identity. 2021;20(3):388–405. Hoerl and McCormack [2016] Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. 
Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. 
arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. 
[2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. 
Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. 
[2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. 
Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. 
LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. 
In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. 
In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. 
[2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. 
Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. 
IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. 
In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. 
[2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. 
Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. 
arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. 
Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 
1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. 
Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. 
[2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. 
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. 
arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. 
Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. 
[2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. 
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 
arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. 
arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. 
Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
- Hoerl C, McCormack T. Making decisions about the future. Seeing the future: Theoretical perspectives on future-oriented mental time travel. 2016;p. 241. Lickel et al. [2014] Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. 
[2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. 
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lickel B, Kushlev K, Savalei V, Matta S, Schmader T. Shame and the motivation to change the self. Emotion. 2014;14(6):1049. Li et al. [2018] Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. 
Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. 
FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Li R, Lin Z, Lin H, Wang W, Meng D. Text emotion analysis: A survey. Journal of Computer Research and Development. 2018;55(1):30–52. Amjad et al. [2022] Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. 
[2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. 
Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Amjad M, Butt S, Zhila A, Sidorov G, Chanona-Hernandez L, Gelbukh A. Survey of Fake News Datasets and Detection Methods in European and Asian Languages. Acta Polytechnica Hungarica. 2022;19(10):185–204. Ahmad et al. [2020] Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. 
Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. 
In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. 
[2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. 
Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. 
arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. 
Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. 
Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. 
A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. 
[2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. 
[2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. 
Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. 
[2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. 
arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. 
[2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. 
Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. 
Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. 
XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. 
In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 
2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. 
RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. 
vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. 
Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. 
arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. 
Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. 
In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. 
Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. 
- Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural Computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature Reviews Neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171. 2022. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601. 2023. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. DialogueGCN: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:1908.11540. 2019. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. DialogueRNN: An attentive RNN for emotion detection in conversations. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:1907.07653. 2019. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog, Google Research; 2022. Available from: https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:2303.09014. 2023. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:2310.04959. 2023.
Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ahmad Z, Jindal R, Ekbal A, Bhattachharyya P. Borrow from rich cousin: transfer learning for emotion detection using cross lingual embedding. Expert Systems with Applications. 2020;139:112851. Balouchzahi et al. [2023] Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. 
In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. 
[2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. 
[2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. 
Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. 
arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. 
ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. 
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. 
[2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. 
Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. 
Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. 
Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. 
[2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. 
[2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. 
arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. 
Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. 
[2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 
arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. 
XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. 
Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. 
Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. 
Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. 
Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. 
[2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. 
[2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. 
Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. 
Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 
arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. 
[2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. 
Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 
arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. 
Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. 
ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. 
Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. 
[2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Balouchzahi F, Butt S, Sidorov G, Gelbukh A. ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. 
arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. 
Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. 
Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. 
In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. 
[2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. 
Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. 
Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. 
arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. 
Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. 
Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. 
Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. 
Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. 
A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. 
[2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. 
Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. 
ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. 
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. 
[1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. 
ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. 
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. 
Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. 
[2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 
2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. 
Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. 
Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. 
Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. 
Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. 
Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. 
Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. 
Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. 
[2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. 
[2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. 
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. 
Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. 
[2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. 
[2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. 
Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
- ReDDIT: Regret detection and domain identification from text. Expert Systems with Applications. 2023;225:120099. Iyyer et al. [2015] Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. 
Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Iyyer M, Manjunatha V, Boyd-Graber J, Daumé III H. Deep unordered composition rivals syntactic methods for text classification. 
In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. 
[2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. 
Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. 
Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. 
Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. 
Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. 
[2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. 
[2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. 
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. 
Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. 
[2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. 
[2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. 
Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
- Deep unordered composition rivals syntactic methods for text classification. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers); 2015. p. 1681–1691. Giles et al. [1994] Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. 
IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Giles CL, Kuhn GM, Williams RJ. Dynamic recurrent neural networks: Theory and applications. IEEE Transactions on Neural Networks. 1994;5(2):153–156. LeCun et al. [1998] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. 
Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. 
Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. 
[2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. 
Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. 
Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. 
[2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. 
[2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. 
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. 
Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. 
[2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. 
[2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. 
Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. 
Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. 
Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. 
Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. 
arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 
1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. 
Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. 
[2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. 
Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. 
[2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. 
Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. 
arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 
2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. 
In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. 
Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. 
In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. 
Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. 
Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. 
arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Scherer KR, Wallbott HG. 
Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. 
A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. 
[2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
- Hochreiter and Schmidhuber [1997] Hochreiter S, Schmidhuber J. Long short-term memory. Neural Computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature Reviews Neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171. 2022. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601. 2023. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. DialogueGCN: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:1908.11540. 2019. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. DialogueRNN: An attentive RNN for emotion detection in conversations.
In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. 
A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. 
Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. 
FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. 
[2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. 
Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. 
XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. 
[2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. 
[2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. 
A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. 
Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. 
A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. 
- Hochreiter S, Schmidhuber J. Long short-term memory. Neural computation. 1997;9(8):1735–1780. Itti and Koch [2001] Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171. 2022. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601. 2023. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. DialogueGCN: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:1908.11540. 2019. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. DialogueRNN: An attentive RNN for emotion detection in conversations. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:1907.07653. 2019. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog; 2022. Available online: https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:2303.09014. 2023. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:2310.04959. 2023. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:2303.03510. 2023. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A. Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The Vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994;67(1):55. doi:10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. CEASE, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. XGBoost: A scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805. 2018. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692. 2019. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:1909.11942. 2019. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:1906.08237. 2019. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. 2019. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:2003.10555. 2020. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z. Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. 
Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. 
Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. 
Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. 
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 
2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. 
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. 
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Hussain N, Sidorov G, Gelbukh A. 
Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. 
Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. 
Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. 
[2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. 
A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
- Itti L, Koch C. Computational modelling of visual attention. Nature reviews neuroscience. 2001;2(3):194–203. Wang et al. [2022] Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 
1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:220311171. 2022;. Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. 
Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 
arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. 
IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. 
[2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. 
[2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. 
[2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. 
arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. 
[2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. 
In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. 
IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. 
Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. 
Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171. 2022.
Wei et al. [2022] Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837.
Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601. 2023.
Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. DialogueGCN: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:1908.11540. 2019.
Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. DialogueRNN: An attentive RNN for emotion detection in conversations. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33; 2019. p. 6818–6825.
Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:1907.07653. 2019.
In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. 
Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. 
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. 
Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. 
Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sidorov G, Balouchzahi F, Butt S, Gelbukh A. 
Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. 
arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. 
Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. 
In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. 
Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 
611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. 
Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 
10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. 
Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. 
Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. 
[2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. 
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 
arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
- Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. 2022;35:24824–24837. Yao et al. [2023] Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. 
Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yao S, Yu D, Zhao J, Shafran I, Griffiths TL, Cao Y, et al. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:230510601. 2023;. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. 
[2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. 
[2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:190811540. 2019;. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. 
Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. 
Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. 
arXiv preprint arXiv:190707653. 2019;. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. 
[2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. 
Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. 
Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 
1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. 
- Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601. 2023. Ghosal et al. [2019] Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A. DialogueGCN: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:1908.11540. 2019. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. DialogueRNN: An attentive RNN for emotion detection in conversations. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:1907.07653. 2019. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32.
arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. 
Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. 
arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. 
Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 
1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. 
Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. 
[2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. 
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. 
arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. 
Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. 
[2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. 
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 
- Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. arXiv preprint arXiv:1908.11540. 2019. Majumder et al. [2019] Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E. Dialoguernn: An attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33; 2019. p. 6818–6825. Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:1907.07653. 2019. Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983. Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog, Google Research; 2022. Available online at https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:2303.09014. 2023. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:2310.04959. 2023. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:2303.03510. 2023. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A. Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 Jul;67(1):55–55. doi:10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805. 2018. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692. 2019. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:1909.11942. 2019. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:1906.08237. 2019. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. 2019. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:2003.10555. 2020. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z. Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. 
[2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. 
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
DialogueRNN: An attentive RNN for emotion detection in conversations. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33; 2019. p. 6818–6825.
Rathnayaka et al. [2019] Rathnayaka P, Abeysinghe S, Samarajeewa C, Manchanayake I, Walpola MJ, Nawaratne R, et al. Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:1907.07653. 2019.
Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983.
Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32.
Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog, Google Research; 2022. Available online at https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html; last updated 2022.
Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:2303.09014. 2023.
Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:2310.04959. 2023.
Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16.
Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4.
Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021.
Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:2303.03510. 2023.
Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A. Leveraging the power of transformers for guilt detection in text.
Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The Vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619.
Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 Jul;67(1):55. doi:10.1037/0022-3514.67.1.55.
Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. CEASE, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626.
Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian Journal of Statistics. 1999;27(1):3–23.
Breiman [2001] Breiman L. Random forests. Machine Learning. 2001;45(1):5–32.
Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences. 1997;55(1):119–139.
Chen and Guestrin [2016] Chen T, Guestrin C. XGBoost: A scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2016. p. 785–794.
Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016.
Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805. 2018.
Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692. 2019.
Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:1909.11942. 2019.
Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:1906.08237. 2019.
Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. 2019.
Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:2003.10555. 2020.
Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z. Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. 
[2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. 
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 
arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. 
arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. 
Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
- Gated recurrent neural network approach for multilabel emotion detection in microblogs. arXiv preprint arXiv:1907.07653. 2019.
- Sidorov et al. [2023] Sidorov G, Balouchzahi F, Butt S, Gelbukh A. Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983.
- Butt et al. [2021] Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32.
- Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog, Google Research; 2022. Available online at https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html.
- Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:2303.09014. 2023.
- Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:2310.04959. 2023.
- Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of Cognition and Emotion. 1999;98(45–60):16.
- Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to Emotion. 1984;1984(197–219):2–4.
- Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021.
- Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:2303.03510. 2023.
- Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A. Leveraging the power of transformers for guilt detection in text.
- Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The Vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619.
- Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994;67(1):55. doi:10.1037/0022-3514.67.1.55.
- Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. CEASE, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626.
- Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian Journal of Statistics. 1999;27(1):3–23.
- Breiman [2001] Breiman L. Random forests. Machine Learning. 2001;45(1):5–32.
- Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences. 1997;55(1):119–139.
- Chen and Guestrin [2016] Chen T, Guestrin C. XGBoost: A scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2016. p. 785–794.
[2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. 
[2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. 
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. 
arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets. Applied Sciences. 2023;13(6):3983.
Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32.
Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog. 2022. Available online at: https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html (last updated 2022).
Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:2303.09014. 2023.
Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:2310.04959. 2023.
Ekman P, et al. Basic emotions. In: Handbook of Cognition and Emotion; 1999. p. 45–60.
arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Butt S, Ashraf N, Siddiqui MHF, Sidorov G, Gelbukh A. Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. 
Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. 
In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. 
Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 
611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. 
Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 
10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. 
Transformer-based extractive social media question answering on TweetQA. Computación y Sistemas. 2021;25(1):23–32. Wei and Zhou [2022] Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog, Google Research; 2022. Available online at https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:2303.09014. 2023. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:2310.04959. 2023. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of Cognition and Emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to Emotion. 1984;1984(197-219):2–4.
[2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog Google Research Online verfügbar unter https://ai googleblog com/2022/05/language-models-perform-reasoning-via html, zuletzt aktualisiert am. 2022;11:2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. 
[2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. 
Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. 
FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 
arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. 
- Wei J, Zhou D. Language Models Perform Reasoning via Chain of Thought. Google AI Blog, Google Research. Available online at https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html; last updated 2022. Paranjape et al. [2023] Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:2303.09014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:2310.04959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of Cognition and Emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to Emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;.
XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Paranjape B, Lundberg S, Singh S, Hajishirzi H, Zettlemoyer L, Ribeiro MT. ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. 
Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. 
RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 
2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 
2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. 
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. 
Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. 
In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. 
[2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. 
- ART: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:230309014. 2023;. Yu et al. [2023] Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. 
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yu Z, He L, Wu Z, Dai X, Chen J. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 
2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. 
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. 
[2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. 
[2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. 
XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. 
Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. 
Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. 
[2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. 
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
- Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:231004959. 2023;. Ekman et al. [1999] Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
- Ekman P, et al. Basic emotions. Handbook of cognition and emotion. 1999;98(45-60):16. Plutchik [1984] Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Plutchik R. Emotions: A general psychoevolutionary theory. Approaches to emotion. 
1984;1984(197-219):2–4. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021;. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. 
arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. 
Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. 
Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. 
- Plutchik R. Emotions: A general psychoevolutionary theory. In: Approaches to emotion. 1984. p. 197–219. Deng and Ren [2021] Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Transactions on Affective Computing. 2021. Meque et al. [2023] Meque AGM, Hussain N, Sidorov G, Gelbukh A. Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:2303.03510. 2023. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A. Leveraging the power of transformers for guilt detection in text; 2024. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The Vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994;67(1):55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. CEASE, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian Journal of Statistics. 1999;27(1):3–23.
[2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. 
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. 
arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
- Guilt Detection in Text: A Step Towards Understanding Complex Emotions. arXiv preprint arXiv:230303510. 2023;. Meque et al. [2024] Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Meque AGM, Angel J, Sidorov G, Gelbukh A.: Leveraging the power of transformers for guilt detection in text. Lykousas et al. [2019] Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. 
Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619. Scherer and Wallbott [1994] Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. 
Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994 jul;67(1):55–55. 10.1037/0022-3514.67.1.55. Ghosh et al. [2020] Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. 
[2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Ghosh S, Ekbal A, Bhattacharyya P. Cease, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626. Banerjee et al. [1999] Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23. Breiman [2001] Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. 
In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Breiman L. Random forests. Machine learning. 2001;45(1):5–32. Freund and Schapire [1997] Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Freund Y, Schapire RE. 
A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139. Chen and Guestrin [2016] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794. Joulin et al. [2016] Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. 
In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016. . Devlin et al. [2018] Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. 
[2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
- Lykousas N, Patsakis C, Kaltenbrunner A, Gómez V. Sharing emotions at scale: The Vent dataset. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 13; 2019. p. 611–619.
- Scherer KR, Wallbott HG. Evidence for universality and cultural variation of differential emotion response patterning: Correction. Journal of Personality and Social Psychology. 1994;67(1):55. 10.1037/0022-3514.67.1.55.
- Ghosh S, Ekbal A, Bhattacharyya P. CEASE, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626.
- Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian Journal of Statistics. 1999;27(1):3–23.
- Breiman L. Random forests. Machine Learning. 2001;45(1):5–32.
- Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences. 1997;55(1):119–139.
- Chen T, Guestrin C. XGBoost: A scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2016. p. 785–794.
- Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016.
- Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805. 2018.
- Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692. 2019.
- Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:1909.11942. 2019.
- Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:1906.08237. 2019.
- Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. 2019.
- Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:2003.10555. 2020.
- Xian Y, Lampert CH, Schiele B, Akata Z. Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. 2020.
- CEASE, a corpus of emotion annotated suicide notes in English. In: Proceedings of the 12th Language Resources and Evaluation Conference; 2020. p. 1618–1626.
- Banerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: A review of interrater agreement measures. Canadian journal of statistics. 1999;27(1):3–23.
- Breiman L. Random forests. Machine learning. 2001;45(1):5–32.
- Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences. 1997;55(1):119–139.
- Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining; 2016. p. 785–794.
- Joulin A, Grave E, Bojanowski P, Mikolov T. FastText.zip: Compressing text classification models. In: 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings; 2016.
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:181004805. 2018;. Liu et al. [2019] Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. 
arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
- RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:190711692. 2019;. Lan et al. [2019] Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
- ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:190911942. 2019;. Yang et al. [2019] Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
- XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:190608237. 2019;. Sanh et al. [2019] Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:191001108. 2019;. Clark et al. [2020] Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Clark K, Luong MT, Le QV, Manning CD. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. arXiv preprint arXiv:200310555. 2020;. Xian et al. [2020] Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly. Xian Y, Lampert CH, Schiele B, Akata Z.: Zero-Shot Learning – A Comprehensive Evaluation of the Good, the Bad and the Ugly.