Quantifying the Uniqueness of Donald Trump in Presidential Discourse (2401.01405v1)

Published 2 Jan 2024 in cs.CL, cs.AI, cs.CY, and cs.SI

Abstract: Does Donald Trump speak differently from other presidents? If so, in what ways? Are these differences confined to any single medium of communication? To investigate these questions, this paper introduces a novel metric of uniqueness based on LLMs, develops a new lexicon for divisive speech, and presents a framework for comparing the lexical features of political opponents. Applying these tools to a variety of corpora of presidential speeches, we find considerable evidence that Trump's speech patterns diverge from those of all major party nominees for the presidency in recent history. Some notable findings include Trump's employment of particularly divisive and antagonistic language targeting his political opponents, and his patterns of repetition for emphasis. Furthermore, Trump is significantly more distinctive than his fellow Republicans, whose uniqueness values are comparably closer to those of the Democrats. These differences hold across a variety of measurement strategies, arise on both the campaign trail and in official presidential addresses, and do not appear to be an artifact of secular time trends.
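The abstract does not spell out how the LLM-based uniqueness metric is computed, but the general family of approaches scores a speaker's text by how surprising it is to a language model fit on other speakers: higher average surprisal suggests more distinctive language. The sketch below illustrates that idea only; a toy unigram model with add-one smoothing stands in for the LLM, and all corpora and names are invented for illustration, not taken from the paper.

```python
import math
from collections import Counter

def train_unigram(corpus_tokens):
    """Fit a toy unigram model on a reference corpus (other speakers)."""
    counts = Counter(corpus_tokens)
    return counts, sum(counts.values()), set(corpus_tokens)

def surprisal(tokens, counts, total, vocab):
    """Average bits per token under the reference model (add-one smoothing,
    with one extra vocabulary slot reserved for out-of-vocabulary words)."""
    v = len(vocab) + 1
    nll = 0.0
    for t in tokens:
        p = (counts.get(t, 0) + 1) / (total + v)
        nll += -math.log2(p)
    return nll / len(tokens)

# Invented toy corpora: a "reference" of other speakers and two targets.
reference = "we will work together to build a stronger nation".split()
speaker_a = "we will build a stronger nation together".split()
speaker_b = "totally crooked losers sad".split()

counts, total, vocab = train_unigram(reference)
# A speaker whose wording matches the reference corpus is less surprising
# (lower average surprisal) than one using out-of-vocabulary language.
assert surprisal(speaker_a, counts, total, vocab) < surprisal(speaker_b, counts, total, vocab)
```

In the paper's setting the unigram model would be replaced by a neural language model (e.g. a GPT-2-class model, per the Radford et al. references) trained or evaluated on presidential corpora, but the comparison logic, scoring each speaker's text against a model of everyone else, is the same shape.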

References (25)
1. Romano, A.: Donald Trump doesn't talk like other presidents. Would he be a better president if he did? Yahoo! News (2017)
2. Miroff, B.: Presidents on Political Ground: Leaders in Action and What They Face. University Press of Kansas, Kansas (2016)
3. Hart, R.: Trump and Us: What He Says and Why People Listen. Cambridge University Press, Cambridge (2020)
4. Körner, R., Overbeck, J.R., Körner, E., Schütz, A.: How the linguistic styles of Donald Trump and Joe Biden reflect different forms of power. Journal of Language and Social Psychology 41(6), 631–658 (2022). https://doi.org/10.1177/0261927X221085309
5. Jamieson, K.H., Taussig, D.: Disruption, demonization, deliverance, and norm destruction: The rhetorical signature of Donald J. Trump. Political Science Quarterly 132(4), 619–650 (2017)
6. Ntontis, E., Jurstakova, K., Neville, F., Haslam, S.A., Reicher, S.: A warrant for violence? An analysis of Donald Trump's speech before the US Capitol attack. British Journal of Social Psychology. https://doi.org/10.1111/bjso.12679
7. Cinar, I., Stokes, S., Uribe, A.: Presidential rhetoric and populism. Presidential Studies Quarterly 50(2), 240–264 (2020)
8. Jordan, K.N., Sterling, J., Pennebaker, J.W., Boyd, R.L.: Examining long-term trends in politics and culture through language of political leaders and cultural institutions. Proceedings of the National Academy of Sciences 116(9), 3476–3481 (2019). https://doi.org/10.1073/pnas.1811987116
9. Zhong, W.: The candidates in their own words: A textual analysis of 2016 presidential primary debates (2016). https://www.aei.org/wp-content/uploads/2016/04/The-candidates-in-their-own-words.pdf
10. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018)
11. Kayam, O.: The readability and simplicity of Donald Trump's language. Political Studies Review 16(1), 73–88 (2018). https://doi.org/10.1177/1478929917706844
12. Frischling, B.: Not "stable genius" again, or please stop making us run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/
13. Woolley, J.T., Peters, G.: The American Presidency Project. Santa Barbara, CA (1999). http://www.presidency.ucsb.edu/ws
14. Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models (2018)
15. Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998)
16. Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022)
17. Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: The language of online polarization. PNAS Nexus 1(1), pgac019 (2022). https://doi.org/10.1093/pnasnexus/pgac019
18. Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167
19. Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017)
20. Monroe, B., Colaresi, M., Quinn, K.: Fightin' Words: Lexical feature selection and evaluation for identifying the content of political conflict. Political Analysis 16 (2009). https://doi.org/10.1093/pan/mpn018
21. Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president's tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020). https://doi.org/10.1038/s41467-020-19644-6
22. Kurtzleben, D.: Why Trump's authoritarian language about "vermin" matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary
23. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019)
24. Falcon, W., The PyTorch Lightning team: PyTorch Lightning (2019). https://doi.org/10.5281/zenodo.3828935. https://github.com/Lightning-AI/lightning
25. Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010)
26. Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162. https://aclanthology.org/D14-1162
https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Jordan, K.N., Sterling, J., Pennebaker, J.W., Boyd, R.L.: Examining long-term trends in politics and culture through language of political leaders and cultural institutions. Proceedings of the National Academy of Sciences 116(9), 3476–3481 (2019) https://doi.org/10.1073/pnas.1811987116 https://www.pnas.org/doi/pdf/10.1073/pnas.1811987116 Zhong [2016] Zhong, W.: The candidates in their own words: A textual analysis of 2016 president primary debates. (2016). https://www.aei.org/wp-content/uploads/2016/04/The-candidates-in-their-own-words.pdf Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018) Kayam [2018] Kayam, O.: The readability and simplicity of donald trump’s language. Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. 
edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. 
[2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Zhong, W.: The candidates in their own words: A textual analysis of 2016 president primary debates. (2016). https://www.aei.org/wp-content/uploads/2016/04/The-candidates-in-their-own-words.pdf Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018) Kayam [2018] Kayam, O.: The readability and simplicity of donald trump’s language. Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. 
[2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. 
[2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018) Kayam [2018] Kayam, O.: The readability and simplicity of donald trump’s language. Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. 
[2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . 
https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Kayam, O.: The readability and simplicity of donald trump’s language. Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. 
PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. 
[2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. 
In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. 
[2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. 
[2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). 
https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. 
Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. 
https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. 
[2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. 
[2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . 
https://aclanthology.org/D14-1162 Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) 
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. 
Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. 
Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. 
ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) 
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
  2. Miroff, B.: Presidents on Political Ground: Leaders in Action and What They Face. University Press of Kansas, Kansas (2016) Hart [2020] Hart, R.: Trump and Us: What He Says and Why People Listen. Cambridge University Press, Cambridge (2020) Körner et al. [2022] Körner, R., Overbeck, J.R., Körner, E., Schütz, A.: How the linguistic styles of donald trump and joe biden reflect different forms of power. Journal of Language and Social Psychology 41(6), 631–658 (2022) https://doi.org/10.1177/0261927X221085309 Jamieson and Taussig [2017] Jamieson, K.H., Taussig, D.: Disruption, demonization, deliverance, and norm destruction: The rhetorical signature of donald j. trump. Political Science Quarterly 132(4), 619–50 (2017) [7] Ntontis, E., Jurstakova, K., Neville, F., Haslam, S.A., Reicher, S.: A warrant for violence? an analysis of donald trump’s speech before the us capitol attack. British Journal of Social Psychology n/a(n/a) https://doi.org/10.1111/bjso.12679 Cinar et al. [2020] Cinar, I., Stokes, S., Uribe, A.: Presidential rhetoric and populism. Presidential Studies Quarterly 50(2), 240–264 (2020) Jordan et al. [2019] Jordan, K.N., Sterling, J., Pennebaker, J.W., Boyd, R.L.: Examining long-term trends in politics and culture through language of political leaders and cultural institutions. Proceedings of the National Academy of Sciences 116(9), 3476–3481 (2019) https://doi.org/10.1073/pnas.1811987116 https://www.pnas.org/doi/pdf/10.1073/pnas.1811987116 Zhong [2016] Zhong, W.: The candidates in their own words: A textual analysis of 2016 president primary debates. (2016). https://www.aei.org/wp-content/uploads/2016/04/The-candidates-in-their-own-words.pdf Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018) Kayam [2018] Kayam, O.: The readability and simplicity of donald trump’s language. 
Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. 
Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Hart, R.: Trump and Us: What He Says and Why People Listen. Cambridge University Press, Cambridge (2020) Körner et al. [2022] Körner, R., Overbeck, J.R., Körner, E., Schütz, A.: How the linguistic styles of donald trump and joe biden reflect different forms of power. 
Journal of Language and Social Psychology 41(6), 631–658 (2022) https://doi.org/10.1177/0261927X221085309 Jamieson and Taussig [2017] Jamieson, K.H., Taussig, D.: Disruption, demonization, deliverance, and norm destruction: The rhetorical signature of donald j. trump. Political Science Quarterly 132(4), 619–50 (2017) [7] Ntontis, E., Jurstakova, K., Neville, F., Haslam, S.A., Reicher, S.: A warrant for violence? an analysis of donald trump’s speech before the us capitol attack. British Journal of Social Psychology n/a(n/a) https://doi.org/10.1111/bjso.12679 Cinar et al. [2020] Cinar, I., Stokes, S., Uribe, A.: Presidential rhetoric and populism. Presidential Studies Quarterly 50(2), 240–264 (2020) Jordan et al. [2019] Jordan, K.N., Sterling, J., Pennebaker, J.W., Boyd, R.L.: Examining long-term trends in politics and culture through language of political leaders and cultural institutions. Proceedings of the National Academy of Sciences 116(9), 3476–3481 (2019) https://doi.org/10.1073/pnas.1811987116 https://www.pnas.org/doi/pdf/10.1073/pnas.1811987116 Zhong [2016] Zhong, W.: The candidates in their own words: A textual analysis of 2016 president primary debates. (2016). https://www.aei.org/wp-content/uploads/2016/04/The-candidates-in-their-own-words.pdf Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018) Kayam [2018] Kayam, O.: The readability and simplicity of donald trump’s language. Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. 
Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). 
https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Körner, R., Overbeck, J.R., Körner, E., Schütz, A.: How the linguistic styles of donald trump and joe biden reflect different forms of power. Journal of Language and Social Psychology 41(6), 631–658 (2022) https://doi.org/10.1177/0261927X221085309 Jamieson and Taussig [2017] Jamieson, K.H., Taussig, D.: Disruption, demonization, deliverance, and norm destruction: The rhetorical signature of donald j. trump. Political Science Quarterly 132(4), 619–50 (2017) [7] Ntontis, E., Jurstakova, K., Neville, F., Haslam, S.A., Reicher, S.: A warrant for violence? an analysis of donald trump’s speech before the us capitol attack. British Journal of Social Psychology n/a(n/a) https://doi.org/10.1111/bjso.12679 Cinar et al. [2020] Cinar, I., Stokes, S., Uribe, A.: Presidential rhetoric and populism. Presidential Studies Quarterly 50(2), 240–264 (2020) Jordan et al. 
[2019] Jordan, K.N., Sterling, J., Pennebaker, J.W., Boyd, R.L.: Examining long-term trends in politics and culture through language of political leaders and cultural institutions. Proceedings of the National Academy of Sciences 116(9), 3476–3481 (2019) https://doi.org/10.1073/pnas.1811987116 https://www.pnas.org/doi/pdf/10.1073/pnas.1811987116 Zhong [2016] Zhong, W.: The candidates in their own words: A textual analysis of 2016 president primary debates. (2016). https://www.aei.org/wp-content/uploads/2016/04/The-candidates-in-their-own-words.pdf Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018) Kayam [2018] Kayam, O.: The readability and simplicity of donald trump’s language. Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. 
PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. 
[2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162. https://aclanthology.org/D14-1162 Kayam [2018] Kayam, O.: The readability and simplicity of Donald Trump's language. Political Studies Review 16(1), 73–88 (2018). https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not "stable genius" again, or please stop making us run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The American Presidency Project. Santa Barbara, CA (1999). http://www.presidency.ucsb.edu/ws Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: The language of online polarization. PNAS Nexus 1(1), pgac019 (2022). https://doi.org/10.1093/pnasnexus/pgac019 Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin' Words: Lexical feature selection and evaluation for identifying the content of political conflict. Political Analysis 16 (2009). https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president's tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020). https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump's authoritarian language about "vermin" matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language models are unsupervised multitask learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935. https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software framework for topic modelling with large corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010)
https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. 
[2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. 
[2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . 
https://aclanthology.org/D14-1162 Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) 
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. 
Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. 
Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. 
ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) 
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
  3. Hart, R.: Trump and Us: What He Says and Why People Listen. Cambridge University Press, Cambridge (2020) Körner et al. [2022] Körner, R., Overbeck, J.R., Körner, E., Schütz, A.: How the linguistic styles of donald trump and joe biden reflect different forms of power. Journal of Language and Social Psychology 41(6), 631–658 (2022) https://doi.org/10.1177/0261927X221085309 Jamieson and Taussig [2017] Jamieson, K.H., Taussig, D.: Disruption, demonization, deliverance, and norm destruction: The rhetorical signature of donald j. trump. Political Science Quarterly 132(4), 619–50 (2017) [7] Ntontis, E., Jurstakova, K., Neville, F., Haslam, S.A., Reicher, S.: A warrant for violence? an analysis of donald trump’s speech before the us capitol attack. British Journal of Social Psychology n/a(n/a) https://doi.org/10.1111/bjso.12679 Cinar et al. [2020] Cinar, I., Stokes, S., Uribe, A.: Presidential rhetoric and populism. Presidential Studies Quarterly 50(2), 240–264 (2020) Jordan et al. [2019] Jordan, K.N., Sterling, J., Pennebaker, J.W., Boyd, R.L.: Examining long-term trends in politics and culture through language of political leaders and cultural institutions. Proceedings of the National Academy of Sciences 116(9), 3476–3481 (2019) https://doi.org/10.1073/pnas.1811987116 https://www.pnas.org/doi/pdf/10.1073/pnas.1811987116 Zhong [2016] Zhong, W.: The candidates in their own words: A textual analysis of 2016 president primary debates. (2016). https://www.aei.org/wp-content/uploads/2016/04/The-candidates-in-their-own-words.pdf Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018) Kayam [2018] Kayam, O.: The readability and simplicity of donald trump’s language. 
Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. 
Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Körner, R., Overbeck, J.R., Körner, E., Schütz, A.: How the linguistic styles of donald trump and joe biden reflect different forms of power. Journal of Language and Social Psychology 41(6), 631–658 (2022) https://doi.org/10.1177/0261927X221085309 Jamieson and Taussig [2017] Jamieson, K.H., Taussig, D.: Disruption, demonization, deliverance, and norm destruction: The rhetorical signature of donald j. trump. 
Political Science Quarterly 132(4), 619–50 (2017) [7] Ntontis, E., Jurstakova, K., Neville, F., Haslam, S.A., Reicher, S.: A warrant for violence? an analysis of donald trump’s speech before the us capitol attack. British Journal of Social Psychology n/a(n/a) https://doi.org/10.1111/bjso.12679 Cinar et al. [2020] Cinar, I., Stokes, S., Uribe, A.: Presidential rhetoric and populism. Presidential Studies Quarterly 50(2), 240–264 (2020) Jordan et al. [2019] Jordan, K.N., Sterling, J., Pennebaker, J.W., Boyd, R.L.: Examining long-term trends in politics and culture through language of political leaders and cultural institutions. Proceedings of the National Academy of Sciences 116(9), 3476–3481 (2019) https://doi.org/10.1073/pnas.1811987116 https://www.pnas.org/doi/pdf/10.1073/pnas.1811987116 Zhong [2016] Zhong, W.: The candidates in their own words: A textual analysis of 2016 president primary debates. (2016). https://www.aei.org/wp-content/uploads/2016/04/The-candidates-in-their-own-words.pdf Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018) Kayam [2018] Kayam, O.: The readability and simplicity of donald trump’s language. Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. 
Carnegie Mellon University (1998)
Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022)
Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), pgac019 (2022). https://doi.org/10.1093/pnasnexus/pgac019
Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: Proceedings of the North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167
Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017)
Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical feature selection and evaluation for identifying the content of political conflict. Political Analysis 16 (2009). https://doi.org/10.1093/pan/mpn018
Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020). https://doi.org/10.1038/s41467-020-19644-6
Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language models are unsupervised multitask learners. OpenAI (2019)
Falcon, W., The PyTorch Lightning team: PyTorch Lightning (2019). https://doi.org/10.5281/zenodo.3828935. https://github.com/Lightning-AI/lightning
Řehůřek, R., Sojka, P.: Software framework for topic modelling with large corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010)
Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162. https://aclanthology.org/D14-1162
https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. 
Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. 
[2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) 
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . 
https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. 
[2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. 
[2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . 
https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. 
Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. 
In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). 
https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
Körner, R., Overbeck, J.R., Körner, E., Schütz, A.: How the linguistic styles of Donald Trump and Joe Biden reflect different forms of power. Journal of Language and Social Psychology 41(6), 631–658 (2022). https://doi.org/10.1177/0261927X221085309
Jamieson, K.H., Taussig, D.: Disruption, demonization, deliverance, and norm destruction: The rhetorical signature of Donald J. Trump. Political Science Quarterly 132(4), 619–650 (2017)
Ntontis, E., Jurstakova, K., Neville, F., Haslam, S.A., Reicher, S.: A warrant for violence? An analysis of Donald Trump's speech before the US Capitol attack. British Journal of Social Psychology. https://doi.org/10.1111/bjso.12679
Cinar, I., Stokes, S., Uribe, A.: Presidential rhetoric and populism. Presidential Studies Quarterly 50(2), 240–264 (2020)
Jordan, K.N., Sterling, J., Pennebaker, J.W., Boyd, R.L.: Examining long-term trends in politics and culture through language of political leaders and cultural institutions. Proceedings of the National Academy of Sciences 116(9), 3476–3481 (2019). https://doi.org/10.1073/pnas.1811987116
Zhong, W.: The candidates in their own words: A textual analysis of 2016 presidential primary debates (2016). https://www.aei.org/wp-content/uploads/2016/04/The-candidates-in-their-own-words.pdf
Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018)
Kayam, O.: The readability and simplicity of Donald Trump's language. Political Studies Review 16(1), 73–88 (2018). https://doi.org/10.1177/1478929917706844
Frischling, B.: Not "stable genius" again, or please stop making us run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/
Woolley, J.T., Peters, G.: The American Presidency Project. Santa Barbara, CA (1999). http://www.presidency.ucsb.edu/ws
Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models (2018)
Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998)
Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022)
Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), pgac019 (2022). https://doi.org/10.1093/pnasnexus/pgac019
Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167
Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017)
Monroe, B., Colaresi, M., Quinn, K.: Fightin' Words: Lexical feature selection and evaluation for identifying the content of political conflict. Political Analysis 16 (2009). https://doi.org/10.1093/pan/mpn018
Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president's tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020). https://doi.org/10.1038/s41467-020-19644-6
Kurtzleben, D.: Why Trump's authoritarian language about "vermin" matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language models are unsupervised multitask learners (2019)
Falcon, W., The PyTorch Lightning team: PyTorch Lightning (2019). https://doi.org/10.5281/zenodo.3828935. https://github.com/Lightning-AI/lightning
Řehůřek, R., Sojka, P.: Software framework for topic modelling with large corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010)
Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162. https://aclanthology.org/D14-1162
Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Cinar, I., Stokes, S., Uribe, A.: Presidential rhetoric and populism. Presidential Studies Quarterly 50(2), 240–264 (2020) Jordan et al. [2019] Jordan, K.N., Sterling, J., Pennebaker, J.W., Boyd, R.L.: Examining long-term trends in politics and culture through language of political leaders and cultural institutions. Proceedings of the National Academy of Sciences 116(9), 3476–3481 (2019) https://doi.org/10.1073/pnas.1811987116 https://www.pnas.org/doi/pdf/10.1073/pnas.1811987116 Zhong [2016] Zhong, W.: The candidates in their own words: A textual analysis of 2016 president primary debates. (2016). https://www.aei.org/wp-content/uploads/2016/04/The-candidates-in-their-own-words.pdf Radford et al. 
[2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018) Kayam [2018] Kayam, O.: The readability and simplicity of donald trump’s language. Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. 
In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Jordan, K.N., Sterling, J., Pennebaker, J.W., Boyd, R.L.: Examining long-term trends in politics and culture through language of political leaders and cultural institutions. 
Proceedings of the National Academy of Sciences 116(9), 3476–3481 (2019) https://doi.org/10.1073/pnas.1811987116 https://www.pnas.org/doi/pdf/10.1073/pnas.1811987116 Zhong [2016] Zhong, W.: The candidates in their own words: A textual analysis of 2016 president primary debates. (2016). https://www.aei.org/wp-content/uploads/2016/04/The-candidates-in-their-own-words.pdf Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018) Kayam [2018] Kayam, O.: The readability and simplicity of donald trump’s language. Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. 
[2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) 
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Zhong, W.: The candidates in their own words: A textual analysis of 2016 president primary debates. (2016). https://www.aei.org/wp-content/uploads/2016/04/The-candidates-in-their-own-words.pdf Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018) Kayam [2018] Kayam, O.: The readability and simplicity of donald trump’s language. Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. 
[2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) 
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018) Kayam [2018] Kayam, O.: The readability and simplicity of donald trump’s language. Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. 
In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . 
https://aclanthology.org/D14-1162 Kayam, O.: The readability and simplicity of donald trump’s language. Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. 
[2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. 
presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). 
https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. 
PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. 
[2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. 
[2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. 
PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. 
[2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). 
https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. 
[2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. 
[2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 
1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. 
[2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. 
[2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. 
Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. 
In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
Jamieson, K.H., Taussig, D.: Disruption, demonization, deliverance, and norm destruction: The rhetorical signature of Donald J. Trump. Political Science Quarterly 132(4), 619–650 (2017)

Ntontis, E., Jurstakova, K., Neville, F., Haslam, S.A., Reicher, S.: A warrant for violence? An analysis of Donald Trump's speech before the US Capitol attack. British Journal of Social Psychology, advance online publication. https://doi.org/10.1111/bjso.12679

Cinar, I., Stokes, S., Uribe, A.: Presidential rhetoric and populism. Presidential Studies Quarterly 50(2), 240–264 (2020)

Jordan, K.N., Sterling, J., Pennebaker, J.W., Boyd, R.L.: Examining long-term trends in politics and culture through language of political leaders and cultural institutions. Proceedings of the National Academy of Sciences 116(9), 3476–3481 (2019). https://doi.org/10.1073/pnas.1811987116

Zhong, W.: The candidates in their own words: A textual analysis of 2016 presidential primary debates (2016). https://www.aei.org/wp-content/uploads/2016/04/The-candidates-in-their-own-words.pdf

Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018)

Kayam, O.: The readability and simplicity of Donald Trump's language. Political Studies Review 16(1), 73–88 (2018). https://doi.org/10.1177/1478929917706844

Frischling, B.: Not "stable genius" again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/

Woolley, J.T., Peters, G.: The American Presidency Project. Santa Barbara, CA (1999). http://www.presidency.ucsb.edu/ws

Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models (2018)

Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998)

Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022)

Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: The language of online polarization. PNAS Nexus 1(1), pgac019 (2022). https://doi.org/10.1093/pnasnexus/pgac019

Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167

Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017)

Monroe, B., Colaresi, M., Quinn, K.: Fightin' Words: Lexical feature selection and evaluation for identifying the content of political conflict. Political Analysis 16 (2009). https://doi.org/10.1093/pan/mpn018

Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president's tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020). https://doi.org/10.1038/s41467-020-19644-6

Kurtzleben, D.: Why Trump's authoritarian language about "vermin" matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary

Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language models are unsupervised multitask learners (2019)

Falcon, W., The PyTorch Lightning team: PyTorch Lightning (2019). https://doi.org/10.5281/zenodo.3828935. https://github.com/Lightning-AI/lightning

Řehůřek, R., Sojka, P.: Software framework for topic modelling with large corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010)

Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162. https://aclanthology.org/D14-1162
https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Zhong, W.: The candidates in their own words: A textual analysis of 2016 president primary debates. (2016). https://www.aei.org/wp-content/uploads/2016/04/The-candidates-in-their-own-words.pdf Radford et al. [2018] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018) Kayam [2018] Kayam, O.: The readability and simplicity of donald trump’s language. Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. 
[2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . 
https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018) Kayam [2018] Kayam, O.: The readability and simplicity of donald trump’s language. Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. 
[2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. 
ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Kayam, O.: The readability and simplicity of donald trump’s language. Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. 
In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . 
https://aclanthology.org/D14-1162 Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. 
[2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. 
[2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . 
https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. 
[2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. 
arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. 
In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. 
  6. Ntontis, E., Jurstakova, K., Neville, F., Haslam, S.A., Reicher, S.: A warrant for violence? An analysis of Donald Trump's speech before the US Capitol attack. British Journal of Social Psychology n/a(n/a) https://doi.org/10.1111/bjso.12679
  7. Cinar, I., Stokes, S., Uribe, A.: Presidential rhetoric and populism. Presidential Studies Quarterly 50(2), 240–264 (2020)
  8. Jordan, K.N., Sterling, J., Pennebaker, J.W., Boyd, R.L.: Examining long-term trends in politics and culture through language of political leaders and cultural institutions. Proceedings of the National Academy of Sciences 116(9), 3476–3481 (2019) https://doi.org/10.1073/pnas.1811987116
  9. Zhong, W.: The candidates in their own words: A textual analysis of 2016 presidential primary debates (2016). https://www.aei.org/wp-content/uploads/2016/04/The-candidates-in-their-own-words.pdf
  10. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018)
  11. Kayam, O.: The readability and simplicity of Donald Trump's language. Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844
  12. Frischling, B.: Not "stable genius" again, or please stop making us run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/
  13. Woolley, J.T., Peters, G.: The American Presidency Project. Santa Barbara, CA (1999). http://www.presidency.ucsb.edu/ws
  14. Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models (2018)
  15. Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998)
  16. Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022)
  17. Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: The language of online polarization. PNAS Nexus 1(1), pgac019 (2022) https://doi.org/10.1093/pnasnexus/pgac019
  18. Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167
  19. Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017)
  20. Monroe, B., Colaresi, M., Quinn, K.: Fightin' Words: Lexical feature selection and evaluation for identifying the content of political conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018
  21. Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president's tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6
  22. Kurtzleben, D.: Why Trump's authoritarian language about "vermin" matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary
  23. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language models are unsupervised multitask learners (2019)
  24. Falcon, W., The PyTorch Lightning team: PyTorch Lightning (2019). https://doi.org/10.5281/zenodo.3828935. https://github.com/Lightning-AI/lightning
  25. Řehůřek, R., Sojka, P.: Software framework for topic modelling with large corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010)
  26. Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162. https://aclanthology.org/D14-1162
[2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. 
ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. 
[2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. 
[2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . 
https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. 
Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. 
[2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) 
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . 
https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. 
[2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. 
[2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . 
https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. 
Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. 
In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). 
https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
  7. Cinar, I., Stokes, S., Uribe, A.: Presidential rhetoric and populism. Presidential Studies Quarterly 50(2), 240–264 (2020)
  8. Jordan, K.N., Sterling, J., Pennebaker, J.W., Boyd, R.L.: Examining long-term trends in politics and culture through language of political leaders and cultural institutions. Proceedings of the National Academy of Sciences 116(9), 3476–3481 (2019). https://doi.org/10.1073/pnas.1811987116
  9. Zhong, W.: The candidates in their own words: A textual analysis of 2016 presidential primary debates (2016). https://www.aei.org/wp-content/uploads/2016/04/The-candidates-in-their-own-words.pdf
  10. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018)
  11. Kayam, O.: The readability and simplicity of Donald Trump's language. Political Studies Review 16(1), 73–88 (2018). https://doi.org/10.1177/1478929917706844
  12. Frischling, B.: Not "stable genius" again, or please stop making us run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/
  13. Woolley, J.T., Peters, G.: The American Presidency Project. Santa Barbara, CA (1999). http://www.presidency.ucsb.edu/ws
  14. Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models (2018)
  15. Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998)
  16. Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022)
  17. Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), pgac019 (2022). https://doi.org/10.1093/pnasnexus/pgac019
  18. Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167
  19. Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017)
  20. Monroe, B., Colaresi, M., Quinn, K.: Fightin' Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009). https://doi.org/10.1093/pan/mpn018
  21. Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president's tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020). https://doi.org/10.1038/s41467-020-19644-6
  22. Kurtzleben, D.: Why Trump's authoritarian language about "vermin" matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary
  23. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners. OpenAI (2019)
  24. Falcon, W., The PyTorch Lightning team: PyTorch Lightning (2019). https://doi.org/10.5281/zenodo.3828935. https://github.com/Lightning-AI/lightning
  25. Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010)
  26. Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162. https://aclanthology.org/D14-1162
https://aclanthology.org/D14-1162 Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . 
https://aclanthology.org/D14-1162 Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). 
https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. 
(eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . 
https://aclanthology.org/D14-1162 Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
  8. Jordan, K.N., Sterling, J., Pennebaker, J.W., Boyd, R.L.: Examining long-term trends in politics and culture through language of political leaders and cultural institutions. Proceedings of the National Academy of Sciences 116(9), 3476–3481 (2019). https://doi.org/10.1073/pnas.1811987116
  9. Zhong, W.: The candidates in their own words: A textual analysis of 2016 presidential primary debates (2016). https://www.aei.org/wp-content/uploads/2016/04/The-candidates-in-their-own-words.pdf
 10. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018)
 11. Kayam, O.: The readability and simplicity of Donald Trump's language. Political Studies Review 16(1), 73–88 (2018). https://doi.org/10.1177/1478929917706844
 12. Frischling, B.: Not "stable genius" again, or please stop making us run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/
 13. Woolley, J.T., Peters, G.: The American Presidency Project. Santa Barbara, CA (1999). http://www.presidency.ucsb.edu/ws
 14. Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models (2018)
 15. Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998)
 16. Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022)
 17. Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), pgac019 (2022). https://doi.org/10.1093/pnasnexus/pgac019
 18. Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: Proceedings of the North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167
 19. Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017)
 20. Monroe, B., Colaresi, M., Quinn, K.: Fightin' words: Lexical feature selection and evaluation for identifying the content of political conflict. Political Analysis 16 (2009). https://doi.org/10.1093/pan/mpn018
 21. Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president's tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020). https://doi.org/10.1038/s41467-020-19644-6
 22. Kurtzleben, D.: Why Trump's authoritarian language about "vermin" matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary
 23. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language models are unsupervised multitask learners (2019)
 24. Falcon, W., The PyTorch Lightning team: PyTorch Lightning (2019). https://doi.org/10.5281/zenodo.3828935. https://github.com/Lightning-AI/lightning
 25. Řehůřek, R., Sojka, P.: Software framework for topic modelling with large corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010)
 26. Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162. https://aclanthology.org/D14-1162
PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. 
[2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). 
https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. 
[2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. 
[2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 
1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. 
[2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. 
[2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. 
Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. 
In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
[2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) 
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. 
Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. 
[2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) 
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. 
[2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. 
[2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. 
[2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. 
Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. 
Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . 
https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . 
https://aclanthology.org/D14-1162 Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. 
[2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
  10. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI (2018) Kayam [2018] Kayam, O.: The readability and simplicity of donald trump’s language. Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. 
In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Kayam, O.: The readability and simplicity of donald trump’s language. 
Political Studies Review 16(1), 73–88 (2018) https://doi.org/10.1177/1478929917706844 https://doi.org/10.1177/1478929917706844 Frischling [2019] Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. 
Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Frischling, B.: Not “stable genius” again, or please stop making US run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/ Woolley and Peters [1999] Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. 
(2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. 
[2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. 
[2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) 
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. 
Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. 
[2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) 
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. 
[2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. 
[2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. 
[2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. 
Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. 
Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . 
https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . 
https://aclanthology.org/D14-1162 Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. 
[2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
  11. Kayam, O.: The readability and simplicity of Donald Trump's language. Political Studies Review 16(1), 73–88 (2018). https://doi.org/10.1177/1478929917706844
  12. Frischling, B.: Not "stable genius" again, or please stop making us run this analysis (2019). https://blog.factba.se/2019/10/03/not-stable-genius-again-or-please-stop-making-us-run-this-analysis/
arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. 
In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. 
Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. 
In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. 
In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . 
https://aclanthology.org/D14-1162 Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . 
https://aclanthology.org/D14-1162 Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). 
https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. 
(eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . 
https://aclanthology.org/D14-1162 Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
  13. Woolley, J.T., Peters, G.: The american presidency project. Santa Barbara, CA. Available from World Wide Web: http://www. presidency. ucsb. edu/ws (1999) Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. 
NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018) Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998) Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), 019 (2022) https://doi.org/10.1093/pnasnexus/pgac019 https://academic.oup.com/pnasnexus/article-pdf/1/1/pgac019/47087061/pgac019.pdf Sap et al. 
[2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167 Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017) Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) 
Melis et al. [2018] Melis, G., Dyer, C., Blunsom, P.: On the state of the art of evaluation in neural language models. (2018)
Chen et al. [1998] Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998)
Liang et al. [2022] Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022)
Simchon et al. [2022] Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), pgac019 (2022). https://doi.org/10.1093/pnasnexus/pgac019
Sap et al. [2021] Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167
Wulczyn et al. [2017] Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017)
Monroe et al. [2009] Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009). https://doi.org/10.1093/pan/mpn018
Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020). https://doi.org/10.1038/s41467-020-19644-6
Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary
Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019)
Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935. https://github.com/Lightning-AI/lightning
Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010)
Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162. https://aclanthology.org/D14-1162
https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
  15. Chen, S.F., Beeferman, D., Rosenfeld, R.: Evaluation metrics for language models. Carnegie Mellon University (1998)
  16. Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al.: Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022) Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 .
https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). 
https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
  17. Simchon, A., Brady, W.J., Van Bavel, J.J.: Troll and divide: the language of online polarization. PNAS Nexus 1(1), pgac019 (2022) https://doi.org/10.1093/pnasnexus/pgac019
  18. Sap, M., Swayamdipta, S., Vianna, L., Zhou, X., Choi, Y., Smith, N.A.: Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In: North American Chapter of the Association for Computational Linguistics (2021). https://api.semanticscholar.org/CorpusID:244117167
  19. Wulczyn, E., Thain, N., Dixon, L.: Ex machina: Personal attacks seen at scale. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1391–1399 (2017)
Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. 
In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
  20. Monroe, B., Colaresi, M., Quinn, K.: Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16 (2009) https://doi.org/10.1093/pan/mpn018 Lewandowsky et al. [2020] Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). 
https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. 
(eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . 
https://aclanthology.org/D14-1162 Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
  21. Lewandowsky, S., Jetter, M., Ecker, U.K.H.: Using the president’s tweets to understand political diversion in the age of social media. Nature Communications 11(1), 5764 (2020) https://doi.org/10.1038/s41467-020-19644-6 Kurtzleben [2023] Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . 
https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. 
ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
  22. Kurtzleben, D.: Why Trump’s authoritarian language about “vermin” matters. NPR (2023). https://www.npr.org/2023/11/17/1213746885/trump-vermin-hitler-immigration-authoritarian-republican-primary Radford et al. [2019] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language Models are Unsupervised Multitask Learners (2019) Falcon and The PyTorch Lightning team [2019] Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. 
Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
  23. Falcon, W., The PyTorch Lightning team: PyTorch Lightning. https://doi.org/10.5281/zenodo.3828935 . https://github.com/Lightning-AI/lightning Řehůřek and Sojka [2010] Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
  24. Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50. ELRA, Valletta, Malta (2010) Pennington et al. [2014] Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162 Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
  25. Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar (2014). https://doi.org/10.3115/v1/D14-1162 . https://aclanthology.org/D14-1162
Authors (7)
  1. Karen Zhou (2 papers)
  2. Alexander A. Meitus (1 paper)
  3. Milo Chase (1 paper)
  4. Grace Wang (7 papers)
  5. Anne Mykland (2 papers)
  6. William Howell (1 paper)
  7. Chenhao Tan (89 papers)
Citations (1)

